A Tunable Loss Function for Robust Classification: Calibration, Landscape, and Generalization

Authors

Abstract

We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification. Theoretically, we illustrate a fundamental connection between $\alpha$-loss and Arimoto conditional entropy, verify the classification-calibration of $\alpha$-loss in order to demonstrate asymptotic optimality via Rademacher complexity generalization techniques, and build upon a notion called strictly local quasi-convexity in order to quantitatively characterize the optimization landscape of $\alpha$-loss. Practically, we perform class imbalance, robustness, and classification experiments on benchmark image datasets using convolutional neural networks. Our main practical conclusion is that certain tasks may benefit from tuning $\alpha$-loss away from log-loss ($\alpha = 1$), and to this end we provide simple heuristics for the practitioner. In particular, navigating the $\alpha$ hyperparameter can readily provide superior model robustness to label flips ($\alpha > 1$) and sensitivity to imbalanced classes ($\alpha < 1$).
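To make the interpolation concrete, the following is a minimal NumPy sketch of $\alpha$-loss as a function of the probability $p$ a model assigns to the true label, assuming the standard form $\ell_\alpha(p) = \frac{\alpha}{\alpha-1}\bigl(1 - p^{(\alpha-1)/\alpha}\bigr)$ with its limits at $\alpha = 1$ and $\alpha = \infty$; the function name and signature are ours for illustration, not from the paper.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """alpha-loss of the probability p_true assigned to the correct label.

    alpha = 1 recovers log-loss, alpha = inf the soft 0-1 loss 1 - p, and
    alpha = 1/2 composed with a sigmoid score p = 1 / (1 + exp(-z))
    recovers the exponential loss exp(-z) on the margin z.
    """
    p_true = np.asarray(p_true, dtype=float)
    if np.isinf(alpha):
        return 1.0 - p_true               # soft 0-1 limit
    if np.isclose(alpha, 1.0):
        return -np.log(p_true)            # log-loss limit
    return (alpha / (alpha - 1.0)) * (1.0 - p_true ** ((alpha - 1.0) / alpha))
```

Intuitively, $\alpha > 1$ flattens the loss on confidently misclassified points, consistent with the robustness to label flips reported above, while $\alpha < 1$ sharpens it, consistent with the reported sensitivity to imbalanced classes.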


Related articles

The Innovation of a Statistical Model to Estimate Dependable Rainfall (DR) and Develop It for Determination and Classification of Drought and Wet Years of Iran

Water from precipitation supplies the countless needs of living organisms, especially humans, and any decline in its quantity or quality directly harms living things. Year-to-year fluctuation in precipitation is a fundamental and highly important characteristic of Iran's annual rainfall, and its damaging effects are reflected in all economic, social, and even political-security arenas. Since the amount of water produced by precipitation is one of the main components of plan...


Quality Loss Function – A Common Methodology for Three Cases

The quality loss function developed by Genichi Taguchi considers three cases: nominal-the-best, smaller-the-better, and larger-the-better. The methodology used to deal with the larger-the-better case is slightly different from the other two cases. This research employs a term called the target-mean ratio to propose a common formula for all three cases to bring about similarity among them. The target-...
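For reference, the three classical cases the blurb refers to can be sketched as follows; these are the textbook Taguchi forms, not the paper's unified target-mean-ratio formula, and the function name and signature are ours.

```python
def taguchi_loss(y, k, target=None, case="nominal"):
    """Classical Taguchi quality-loss cases (textbook forms).

    nominal-the-best  : L = k * (y - target)^2
    smaller-the-better: L = k * y^2    (ideal value is 0)
    larger-the-better : L = k / y^2    (ideal value is infinite)
    """
    if case == "nominal":
        return k * (y - target) ** 2
    if case == "smaller":
        return k * y ** 2
    if case == "larger":
        return k / y ** 2
    raise ValueError("case must be 'nominal', 'smaller', or 'larger'")
```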


The Most Robust Loss Function for Boosting

A boosting algorithm can be understood as gradient descent on a loss function. It is often pointed out that the typical boosting algorithm, AdaBoost, is seriously affected by outliers. In this paper, loss functions for robust boosting are studied. Based on a concept from robust statistics, we propose a positive-part truncation of the loss function which makes the boosting algorith...
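To illustrate the truncation idea, the sketch below caps the exponential loss so that points with large negative margins (e.g., outliers or flipped labels) contribute a bounded loss and a vanishing gradient. This is a generic capped-loss illustration, not necessarily the paper's exact positive-part construction; all names are ours.

```python
import numpy as np

def truncated_exp_loss(margin, cap_at=-1.0):
    """Exponential loss exp(-margin), held constant below margin = cap_at.

    For margins below cap_at the loss plateaus, so badly misclassified
    points stop dominating the gradient-descent (boosting) updates.
    """
    margin = np.asarray(margin, dtype=float)
    return np.minimum(np.exp(-margin), np.exp(-cap_at))
```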


A More General Robust Loss Function

We present a loss function which can be viewed as a generalization of many popular loss functions used in robust statistics: the Cauchy/Lorentzian, Welsch, and generalized Charbonnier loss functions (and by transitivity the L2, L1, L1-L2, and pseudo-Huber/Charbonnier loss functions). We describe and visualize this loss, and document several of its useful properties. Many problems in statistics ...
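The generalization described here appears to be the single-parameter family $\rho(x,\alpha,c) = \frac{|\alpha-2|}{\alpha}\Bigl[\bigl(\frac{(x/c)^2}{|\alpha-2|} + 1\bigr)^{\alpha/2} - 1\Bigr]$; assuming that form, the sketch below handles the removable-singularity cases explicitly. Names and signature are ours.

```python
import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    """General robust loss on residual x with shape alpha and scale c.

    alpha = 2    -> L2:                 (x/c)^2 / 2
    alpha = 1    -> pseudo-Huber:       sqrt((x/c)^2 + 1) - 1
    alpha = 0    -> Cauchy/Lorentzian:  log((x/c)^2 / 2 + 1)
    alpha = -inf -> Welsch:             1 - exp(-(x/c)^2 / 2)
    """
    z = (np.asarray(x, dtype=float) / c) ** 2
    if alpha == 2.0:
        return 0.5 * z
    if alpha == 0.0:
        return np.log1p(0.5 * z)
    if np.isneginf(alpha):
        return 1.0 - np.exp(-0.5 * z)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)
```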


The C-loss function for pattern classification

This paper presents a new loss function for neural network classification, inspired by the recently proposed similarity measure called Correntropy. We show that this function essentially behaves like the conventional square loss for samples that are well within the decision boundary and have small errors, and like the L0 or counting norm for samples that are outliers or are difficult to classify. Depend...
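Assuming the correntropy-induced form that the description suggests (a Gaussian kernel of the error, normalized so the loss equals 1 at unit error), the transition from square-loss to counting-norm behavior can be sketched as follows; the name and normalization choice are ours.

```python
import numpy as np

def c_loss(error, sigma=1.0):
    """Correntropy-style loss: beta * (1 - exp(-error^2 / (2 sigma^2))).

    Near error = 0 this behaves like error^2 / (2 sigma^2) (square loss);
    for large |error| it saturates at beta, a bounded, L0-like penalty.
    """
    error = np.asarray(error, dtype=float)
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma ** 2)))
    return beta * (1.0 - np.exp(-error ** 2 / (2.0 * sigma ** 2)))
```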



Journal

Journal title: IEEE Transactions on Information Theory

Year: 2022

ISSN: 0018-9448, 1557-9654

DOI: https://doi.org/10.1109/tit.2022.3169440